
    Fabrication of nanometer-spaced electrodes using gold nanoparticles

    A simple and highly reproducible technique is demonstrated for the fabrication of metallic electrodes with nanometer separation. Commercially available bare gold colloidal nanoparticles are first trapped by an ac electric field between prefabricated large-separation electrodes to form a low-resistance bridge. A large dc voltage is then applied to break the bridge via electromigration at room temperature, which consistently produces gaps in the sub-10 nm range. The technique is readily applied to prefabricated electrodes with separations up to 1 micron, which can be defined using optical lithography. The simple fabrication scheme will facilitate electronic transport studies of individual nanostructures made by chemical synthesis. As an example, a measurement of a thiol-coated gold nanoparticle showing a clear Coulomb staircase is presented. Comment: To appear in Appl. Phys. Lett. in Dec. 200
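    The Coulomb staircase mentioned above has a simple idealized form: no current flows inside the blockade window |V| < e/2C, and each additional e/C of bias admits roughly one more electron per tunneling cycle, so the current rises in steps. The sketch below illustrates only that idealized shape; the capacitance, tunnel resistance, and step model are illustrative assumptions, not values or equations from the paper.

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge (C)

def ideal_coulomb_staircase(v, c_junction=1e-18, r_tunnel=1e8):
    """Idealized Coulomb-staircase I-V curve: zero current inside the
    blockade window |V| < e/2C, then current rising by one step every
    e/C of additional bias.  Parameters are illustrative assumptions,
    not taken from the measured device in the paper."""
    v = np.asarray(v, dtype=float)
    # number of charging steps admitted at this bias (0 inside the blockade)
    n_steps = np.floor(np.abs(v) * c_junction / E_CHARGE + 0.5)
    step_current = E_CHARGE / (r_tunnel * c_junction)  # height of one current step
    return np.sign(v) * n_steps * step_current

if __name__ == "__main__":
    bias = np.linspace(-0.5, 0.5, 11)  # volts
    for v, i in zip(bias, ideal_coulomb_staircase(bias)):
        print(f"V = {v:+.2f} V  ->  I = {i: .3e} A")
```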

    Less redundant codes for variable size dictionaries

    We report on a family of variable-length codes with less redundancy than the flat code used in most variable-size dictionary-based compression methods. The length of the codes in this family is still bounded above by ⌈log_2 |D|⌉, where |D| denotes the dictionary size. We describe three of these codes, namely the balanced code, the phase-in-binary (PB) code, and the depth-span (DS) code. As the name implies, the balanced code is constructed from a height-balanced tree, so it has the shortest average codeword length. The coding tree for the PB code has the interesting property that it is made of full binary phases, so the code can be computed efficiently using simple binary shift operations. The DS coding tree is maintained in such a way that the coder always finds the longest extendable codeword and extends it until it reaches the maximum length; it is optimal with respect to the code-length contrast. The PB and balanced codes give similar improvements, around 3% to 7%, which is close to the relative redundancy of the flat code. The DS code is particularly good at handling files with a large amount of redundancy, such as a run of a single symbol. We also performed an empirical study of the codeword distribution in the LZW dictionary and propose a scheme called dynamic block shifting (DBS) to further improve the codes' performance. Experiments suggest that DBS is helpful in compressing random sequences. From an application point of view, the PB code with DBS is recommended for general practical use.
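    To make the contrast with the flat code concrete, here is a minimal sketch of the standard phased-in binary construction, where a dictionary of n entries is covered by codewords of only two lengths, k and k+1 bits with k = ⌊log_2 n⌋. This follows the textbook phased-in code and may differ in detail from the PB code described in the paper.

```python
def phased_in_code(value, dict_size):
    """Return the phased-in binary codeword for `value` (0-based) in a
    dictionary of `dict_size` entries, as a bit string.  Standard
    construction: with k = floor(log2 n) and u = 2**(k+1) - n, the first
    u values get k-bit codewords and the remaining n - u values get
    (k+1)-bit codewords.  This is the textbook phased-in code and may
    differ in detail from the paper's PB code."""
    n = dict_size
    k = n.bit_length() - 1          # floor(log2 n)
    if n == 1 << k:                 # power of two: plain k-bit flat code
        return format(value, f"0{k}b") if k else ""
    u = (1 << (k + 1)) - n          # number of short (k-bit) codewords
    if value < u:
        return format(value, f"0{k}b")
    return format(value + u, f"0{k + 1}b")

if __name__ == "__main__":
    # Dictionary of 6 entries: two 2-bit codewords, four 3-bit codewords,
    # versus a flat code that would spend 3 bits on every entry.
    for v in range(6):
        print(v, phased_in_code(v, 6))
```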

    The production and decay of the top partner T in the left-right twin Higgs model at the ILC and CLIC

    The left-right twin Higgs model (LRTHM) predicts the existence of the top partner $T$. In this work, we make a systematic investigation of the single and pair production of this top partner through the processes $e^{+}e^{-}\to t\bar{T} + T\bar{t}$ and $T\bar{T}$, and of the neutral-scalar (the SM-like Higgs boson $h$ or the neutral pseudoscalar boson $\phi^{0}$) associated productions $e^{+}e^{-}\to t\bar{T}h + T\bar{t}h$, $T\bar{T}h$, $t\bar{T}\phi^{0} + T\bar{t}\phi^{0}$, and $T\bar{T}\phi^{0}$. From the numerical evaluation of the production cross sections and the relevant phenomenological analysis we find that (a) in the reasonable parameter space, the production rates of these processes can reach the level of several to tens of fb; (b) in some cases, the peak value of the resonance production cross section is enhanced significantly and reaches the level of pb; (c) the subsequent decay $T \to \phi^{+}b \to t\bar{b}b$ may generate phenomenological features rather different from the signals of other new-physics models beyond the standard model (SM); and (d) since the relevant SM background is generally small, some signals of the top partner $T$ predicted by the LRTHM may be detectable in future ILC and CLIC experiments. Comment: 20 pages, 15 figures, and 6 tables. Minor corrections to text; new references added.

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. The commonly used separable transforms such as wavelets are not best suited to images because they cannot exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Based on a previous MFT-based linear feature model, the work extends the extraction method to the situation where the image is corrupted by noise. The problem is tackled by combining a "signal + noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, matched to the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, owing to its ability to preserve both types of feature. Further improvements can be made by using the information given by the linear feature extraction process to configure the filter. The denoising results compare favourably against other state-of-the-art directional representations.
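    As a rough illustration of weighting local frequency coefficients with a Gaussian filter tied to the spread of the magnitude spectrum, here is a minimal block-wise FFT denoising sketch. The block size, the isotropic Gaussian, and the way its width follows the spectral dispersion are illustrative assumptions; the thesis operates on local MFT coefficients with a directionally matched filter.

```python
import numpy as np

def gaussian_spectral_denoise(image, block=16):
    """Crude block-wise spectral filter: for each block, weight the FFT
    coefficients with an isotropic Gaussian whose width follows the
    magnitude-weighted second moment (dispersion) of the block's
    spectrum.  Illustrative only; not the MFT-based scheme of the thesis."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    fy = np.fft.fftfreq(block)[:, None]
    fx = np.fft.fftfreq(block)[None, :]
    radius2 = fy ** 2 + fx ** 2
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].astype(float)
            spec = np.fft.fft2(patch)
            mag = np.abs(spec)
            # dispersion: magnitude-weighted mean squared frequency radius
            sigma2 = (mag * radius2).sum() / (mag.sum() + 1e-12)
            weight = np.exp(-radius2 / (2.0 * sigma2 + 1e-12))
            out[y:y + block, x:x + block] = np.fft.ifft2(spec * weight).real
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((16, 16)))
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    denoised = gaussian_spectral_denoise(noisy)
    print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((denoised - clean) ** 2))
```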

    Kernel Truncated Regression Representation for Robust Subspace Clustering

    Subspace clustering aims to group data points into multiple clusters, each of which corresponds to one subspace. Most existing subspace clustering approaches assume that the input data lie on linear subspaces. In practice, however, this assumption usually does not hold. To achieve nonlinear subspace clustering, we propose a novel method called kernel truncated regression representation. Our method consists of the following four steps: 1) projecting the input data into a hidden space in which each data point can be linearly represented by the other data points; 2) calculating the linear representation coefficients of the data in the hidden space; 3) truncating the trivial coefficients to achieve robustness and block-diagonality; and 4) executing the graph-cut operation on the coefficient matrix by solving a graph Laplacian problem. Our method has the advantages of a closed-form solution and the capacity to cluster data points that lie on nonlinear subspaces. The first advantage makes our method efficient in handling large-scale datasets, and the second enables the proposed method to meet the challenge of nonlinear subspace clustering. Extensive experiments on six benchmarks demonstrate the effectiveness and efficiency of the proposed method in comparison with current state-of-the-art approaches. Comment: 14 pages.
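    The four-step pipeline above maps naturally onto a short spectral-clustering routine. Below is a minimal sketch under explicit assumptions: an RBF kernel for the hidden space, a ridge-style closed form for the self-representation coefficients, a hard top-k truncation per column, and scikit-learn's SpectralClustering for the final graph cut. These formulas are stand-ins for the paper's steps, not its published equations.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def kernel_truncated_regression_clustering(X, n_clusters, lam=0.1, keep=10, gamma=1.0):
    """Sketch of a kernelized self-representation clustering pipeline.
    Steps mirror the abstract: (1) kernel (hidden-space) mapping,
    (2) closed-form self-representation coefficients, (3) truncation of
    small coefficients, (4) spectral clustering on the induced graph.
    The specific formulas are assumptions, not the paper's equations."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma=gamma)                 # step 1: hidden space via kernel
    C = np.linalg.solve(K + lam * np.eye(n), K)       # step 2: ridge-style closed form
    np.fill_diagonal(C, 0.0)                          # forbid trivial self-representation
    # step 3: keep only the `keep` largest-magnitude coefficients per column
    drop = np.argsort(-np.abs(C), axis=0)[keep:, :]
    np.put_along_axis(C, drop, 0.0, axis=0)
    W = 0.5 * (np.abs(C) + np.abs(C).T)               # symmetric affinity graph
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(W)  # step 4: graph cut
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two noisy concentric circles: a simple nonlinear two-cluster surrogate
    t = rng.uniform(0, 2 * np.pi, 200)
    r = np.repeat([1.0, 3.0], 100)
    X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
    print(kernel_truncated_regression_clustering(X, n_clusters=2)[:20])
```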